10 research outputs found

    Two-Stage Multi-Objective Meta-Heuristics for Environmental and Cost-Optimal Energy Refurbishment at District Level

    Energy efficiency and environmental performance optimization at the district level are following an upward trend, largely driven by the European Union (EU) targets of reducing Global Warming Potential (GWP) by 20% by 2020 and by 40% by 2030 compared with 1990 levels. This paper advances over the state of the art by proposing two multi-objective algorithms, the Non-dominated Sorting Genetic Algorithm (NSGA-II) and the Multi-Objective Harmony Search (MOHS), aimed at obtaining cost-effective energy refurbishment scenarios and supporting the decision-making procedure at district level. This challenge is not trivial, since the optimisation process must provide feasible solutions for a simultaneous environmental and economic assessment at district scale while taking into consideration highly demanding, real-based constraints regarding district- and building-specific requirements. Consequently, this paper proposes a two-stage optimization methodology that reduces the energy demand and fossil fuel consumption with an affordable investment cost at building level, and minimizes the total payback time while minimizing the GWP at district level. To demonstrate the effectiveness of the proposed two-stage multi-objective approaches, this work presents simulation results for two real district case studies in Donostia-San Sebastian (Spain), for which a reduction of up to 30% in GWP at district level is obtained for a Payback Time (PT) of 2–3 years. Part of this work has been developed from results obtained during the H2020 “Optimised Energy Efficient Design Platform for Refurbishment at District Level” (OptEEmAL) project, Grant No. 680676
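
    As a rough illustration of the Pareto-dominance ranking at the core of NSGA-II (and typically of multi-objective Harmony Search variants), the sketch below extracts the non-dominated set from a list of hypothetical refurbishment scenarios evaluated on two minimisation objectives, GWP and payback time. The scenario values are invented for illustration only and are not results from the study.

        # Minimal sketch of Pareto (non-dominated) filtering for two minimisation
        # objectives, the ranking step shared by NSGA-II and Pareto-based MOHS.
        # The scenario values below are purely illustrative.

        def dominates(a, b):
            """True if a is at least as good as b in every objective and
            strictly better in at least one (minimisation)."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(solutions):
            """Return the non-dominated subset of (gwp, payback) tuples."""
            return [s for s in solutions
                    if not any(dominates(other, s) for other in solutions if other is not s)]

        # Hypothetical scenarios: (GWP in kgCO2-eq/m2.year, payback time in years)
        scenarios = [(42.0, 2.5), (38.5, 3.1), (45.2, 2.1), (38.5, 4.0), (50.0, 1.8)]
        print(pareto_front(scenarios))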

    Normalization Influence on ANN-Based Models Performance: A New Proposal for Features’ Contribution Analysis

    Artificial Neural Networks (ANNs) are weighted directed graphs of interconnected neurons widely employed to model complex problems. However, selecting the optimal ANN architecture and its training parameters is not enough to obtain reliable models. The data preprocessing stage is fundamental to improving the model’s performance. Specifically, Feature Normalisation (FN) is commonly utilised to remove the effect of the features’ magnitude, aiming to equalise each feature’s contribution to the model training. Nevertheless, this work demonstrates that the choice of FN method affects the model performance. Also, ANNs are commonly considered a “black box” due to their lack of interpretability, and several works therefore aim to analyse the features’ contribution to the network when estimating the output. However, these methods, specifically those based on the network’s weights, like Garson’s or Yoon’s methods, do not consider preprocessing factors, such as the dispersion factors previously employed to transform the input data. This work proposes a new features’ relevance analysis method that includes the dispersion factors in the weight matrix analysis to infer each feature’s actual contribution to the network output more precisely. Besides, the Proportional Dispersion Weights (PWD) are proposed as explanatory factors of the similarity between models’ performance results. The conclusions from this work improve the understanding of the features’ contribution to the model, which enhances the feature selection strategy and is fundamental for reliably modelling a given problem. This work was supported in part by DATA Inc. Fellowship under Grant 48-AF-W1-2019-00002, in part by Tecnalia Research and Innovation Ph.D. Scholarship, in part by the Spanish Centro para el Desarrollo Tecnológico Industrial (CDTI, Ministry of Science and Innovation) through the ‘‘Red Cervera’’ Programme (AI4ES Project) under Grant CER-20191029, and in part by the 3KIA Project funded by the ELKARTEK Program of the SPRI-Basque Government under Grant KK-2020/00049
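
    To make the weight-based relevance analysis concrete, the following sketch computes Garson-style relative importances for a single-hidden-layer network from its weight matrices and then rescales them by a per-feature dispersion factor (here, the standard deviation used in z-score normalisation), in the spirit of the dispersion-aware analysis described above. The weights and dispersion values are synthetic, and the exact weighting scheme of the paper may differ.

        import numpy as np

        # Synthetic weights for a network with 3 inputs and 4 hidden neurons (hypothetical).
        W_ih = np.random.randn(3, 4)        # input-to-hidden weights
        W_ho = np.random.randn(4, 1)        # hidden-to-output weights
        sigma = np.array([0.5, 2.0, 10.0])  # per-feature dispersion used in normalisation

        def garson_importance(W_ih, W_ho):
            """Garson-style relative importance of each input feature."""
            contrib = np.abs(W_ih) * np.abs(W_ho).T              # (inputs, hidden)
            contrib /= np.abs(W_ih).sum(axis=0, keepdims=True)   # share within each hidden neuron
            importance = contrib.sum(axis=1)
            return importance / importance.sum()

        plain = garson_importance(W_ih, W_ho)
        # Hypothetical dispersion-aware variant: re-inject the scaling factors removed
        # by normalisation so the importances refer to the original feature units.
        dispersion_aware = plain * sigma
        dispersion_aware /= dispersion_aware.sum()
        print(plain, dispersion_aware)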

    A Multi-objective Harmony Search Algorithm for Optimal Energy and Environmental Refurbishment at District Level Scale

    Nowadays, municipalities face increasing commitments regarding the energy and environmental performance of cities and districts. The multiple factors that characterize a district scenario, such as the selection of refurbishment strategies, the combination of passive, active and control measures, the surface to be refurbished and the generation systems to be substituted, strongly influence the final impact of the refurbishment solution. To meet this increasing demand and consider all the above-mentioned district factors, municipalities need optimisation methods that support the decision-making process at district level scale when defining cost-effective refurbishment scenarios. Furthermore, the optimisation process should enable the evaluation of feasible solutions at district scale, taking into account that each district and building has specific boundaries and barriers. Considering these needs, this paper presents a multi-objective approach allowing a simultaneous environmental and economic assessment of refurbishment scenarios at district scale. With the aim of demonstrating the effectiveness of the proposed approach, a real scenario of the Gros district in the city of Donostia-San Sebastian (North of Spain) is presented. After analysing the baseline scenario in terms of energy performance and environmental and economic impacts, the multi-objective Harmony Search algorithm is employed to reduce the environmental impact in terms of Global Warming Potential (GWP) while minimizing the investment cost, obtaining the best ranking of economic and environmental refurbishment scenarios for the Gros district. OptEEmAL project, Grant Agreement Number 680676
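
    For readers unfamiliar with Harmony Search, the sketch below shows a minimal single-objective version of the improvisation and memory-update loop on a toy cost function; the multi-objective variant used above would typically replace the scalar comparison with Pareto-based ranking. All parameter values and the objective are illustrative, not those of the OptEEmAL study.

        import random

        # Minimal Harmony Search sketch (single-objective, toy cost function).
        # HMCR: harmony memory considering rate, PAR: pitch adjusting rate, BW: bandwidth.
        HMS, HMCR, PAR, BW, ITERS, DIM = 10, 0.9, 0.3, 0.05, 500, 4

        def cost(x):
            # Toy stand-in for the real evaluation (e.g. simulated GWP plus investment cost).
            return sum((xi - 0.3) ** 2 for xi in x)

        memory = [[random.random() for _ in range(DIM)] for _ in range(HMS)]

        for _ in range(ITERS):
            new = []
            for d in range(DIM):
                if random.random() < HMCR:                       # reuse a memorised value
                    value = random.choice(memory)[d]
                    if random.random() < PAR:                    # pitch adjustment
                        value = min(1.0, max(0.0, value + random.uniform(-BW, BW)))
                else:                                            # random exploration
                    value = random.random()
                new.append(value)
            worst = max(memory, key=cost)
            if cost(new) < cost(worst):                          # replace the worst harmony
                memory[memory.index(worst)] = new

        best = min(memory, key=cost)
        print(best, cost(best))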

    Novel Light Coupling Systems Devised Using a Harmony Search Algorithm Approach

    We report a critical assessment of the use of an Inverse Design (ID) approach driven by an improved Harmony Search (IHS) algorithm for enhancing light coupling to densely integrated photonic integrated circuits (PICs) using novel grating structures. Grating couplers, which perform as a very attractive vertical coupling scheme for standard silicon nanowaveguides, are nowadays a common component in almost every PIC. Nevertheless, their efficiency can be greatly enhanced by using our ID methodology, which can deal simultaneously with many physical and geometrical parameters. Moreover, this method paves the way for designing more sophisticated non-uniform gratings, which not only match the coupling efficiency of conventional periodic corrugated waveguides, but also allow devising more complex components such as wavelength or polarization splitters, to name a few

    Soft-Sensor for Class Prediction of the Percentage of Pentanes in Butane at a Debutanizer Column

    Refineries are complex industrial systems that transform crude oil into more valuable subproducts. Thanks to advances in sensors, easily measurable variables are continuously monitored and several data-driven soft-sensors have been proposed to control the distillation process and the quality of the resulting subproducts. However, data preprocessing and soft-sensor modelling are still complex and time-consuming tasks that are expected to be automated in the context of Industry 4.0. Although several automated learning (autoML) approaches have recently been proposed, these rely on model configuration and hyper-parameter optimisation. This paper advances the state-of-the-art by proposing an autoML approach that selects, among different normalisation and feature weighting preprocessing techniques and various well-known Machine Learning (ML) algorithms, the best configuration to create a reliable soft-sensor for the problem at hand. As proven in this research, each normalisation method transforms a given dataset differently, which ultimately affects the ML algorithm’s performance. The presented autoML approach therefore treats feature preprocessing, together with algorithm selection and configuration, as a fundamental stage of the methodology. The proposed autoML approach is applied to real data from a refinery in the Basque Country to create a soft-sensor that complements the operators’ decision-making and, based on the operational variables of a distillation process, detects 400 min in advance and with 98.925% precision whether the resulting product will not reach the quality standards. This research received no external funding
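
    A minimal sketch of this kind of joint search over normalisation methods and ML algorithms, using scikit-learn pipelines and cross-validation on synthetic data; the approach described above additionally covers feature weighting and algorithm configuration, and the refinery data are of course not reproduced here.

        from sklearn.datasets import make_classification
        from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.pipeline import Pipeline
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for the debutanizer data (operational variables -> quality class).
        X, y = make_classification(n_samples=500, n_features=10, random_state=0)

        scalers = {"zscore": StandardScaler(), "minmax": MinMaxScaler(), "robust": RobustScaler()}
        models = {"logreg": LogisticRegression(max_iter=1000),
                  "rf": RandomForestClassifier(random_state=0)}

        best = None
        for s_name, scaler in scalers.items():
            for m_name, model in models.items():
                pipe = Pipeline([("scale", scaler), ("model", model)])
                score = cross_val_score(pipe, X, y, cv=5, scoring="precision").mean()
                if best is None or score > best[0]:
                    best = (score, s_name, m_name)

        print("best configuration:", best)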

    Machine learning based adaptive soft sensor for flash point inference in a refinery realtime process

    In industrial control processes, certain characteristics are sometimes difficult to measure with a physical sensor due to technical and/or economic limitations. This is especially true in the petrochemical industry, and some of those quantities are crucial for operators and process safety. This is the case for the automotive diesel Flash Point Temperature (FT). Traditional methods for FT estimation are based on the empirical inference between flammability properties and the target magnitude: samples are taken from the process and analysed in the laboratory, which can take hours from collection to flash point measurement. This makes real-time monitoring very difficult and leads to safety and economic losses. This study defines a procedure based on Machine Learning modules and demonstrates the value of real-time monitoring on real data from a major international refinery. Easily measured values provided in real time, such as temperature, pressure and hydraulic flow, are used as inputs, and a benchmark of different regression algorithms for FT estimation is presented. The study highlights the importance of sequencing preprocessing techniques for the correct inference of values, and the implementation of adaptive learning strategies achieves considerable economic benefits in the productization of this soft sensor. The validity of the method is tested under real refinery conditions. In addition, real-world industrial data sets tend to be unstable and volatile, and the data are often affected by noise, outliers, irrelevant or unnecessary features, and missing data. By introducing a new concept, the adaptive soft sensor, this contribution demonstrates the importance of dynamically adapting Machine Learning schemes through their combination with feature selection, dimensionality reduction and signal processing techniques. The economic benefits of applying this soft sensor in the refinery’s production plant are presented as potential semi-annual savings. This work has received funding support from the SPRI-Basque Government through the ELKARTEK program (OILTWIN project, ref. KK-2020/00052)
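
    As a rough sketch of the adaptive soft sensor idea, the code below periodically retrains a regressor on a sliding window of the most recent labelled samples so the model tracks drifting process conditions; the data generator, window size and model are placeholders rather than the refinery setup.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        rng = np.random.default_rng(0)

        # Synthetic drifting process: the target depends on 3 inputs and the
        # relationship slowly changes over time (placeholder for real refinery data).
        def sample_batch(t, n=50):
            X = rng.normal(size=(n, 3))
            drift = 0.01 * t
            y = 60 + (5 + drift) * X[:, 0] - 3 * X[:, 1] + rng.normal(scale=0.5, size=n)
            return X, y

        WINDOW = 500
        X_hist, y_hist = np.empty((0, 3)), np.empty(0)
        model = GradientBoostingRegressor()

        for t in range(20):                         # streaming batches
            X_new, y_new = sample_batch(t)
            if len(y_hist) > 0:                     # evaluate before adapting
                err = np.mean(np.abs(model.predict(X_new) - y_new))
                print(f"batch {t}: MAE = {err:.2f}")
            X_hist = np.vstack([X_hist, X_new])[-WINDOW:]
            y_hist = np.concatenate([y_hist, y_new])[-WINDOW:]
            model.fit(X_hist, y_hist)               # retrain on the sliding window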

    Underwater Robot Task Planning Using Multi-Objective Meta-Heuristics

    Get PDF
    Robots deployed in the underwater medium are subject to stringent operational conditions that impose a high degree of criticality on the allocation of resources and the scheduling of operations in mission planning. In this context, the so-called cost of a mission must be considered as an additional criterion when designing optimal task schedules within the mission at hand. Such a cost can be conceived as the impact of the mission on the robotic resources themselves, ranging from battery consumption to other negative effects such as mechanical erosion. This manuscript focuses on this issue by devising three heuristic solvers aimed at efficiently scheduling tasks in robotic swarms that collaborate to accomplish a mission, and by presenting experimental results obtained over realistic scenarios in the underwater environment. The heuristic techniques resort to a Random-Keys encoding strategy to represent the allocation of robots to tasks and the relative execution order of those tasks within the schedule of a given robot. The obtained results reveal interesting differences in terms of Pareto optimality and spread between the algorithms considered in the benchmark, which are insightful for the selection of a proper task scheduler in real underwater campaigns
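
    One common way to decode the Random-Keys encoding mentioned above is sketched here: each task gets one real-valued key whose integer part (after scaling) selects the robot and whose fractional part fixes the task’s relative order within that robot’s schedule. The key ranges and problem sizes are made up for the example and are not the paper’s exact decoder.

        import random

        def decode_random_keys(keys, n_robots):
            """Decode a vector of keys in [0, n_robots) into per-robot task schedules.
            int(key) selects the robot; the fractional part gives the execution order."""
            schedules = {r: [] for r in range(n_robots)}
            for task, key in enumerate(keys):
                robot = min(int(key), n_robots - 1)
                schedules[robot].append((key - int(key), task))
            return {r: [t for _, t in sorted(items)] for r, items in schedules.items()}

        # Hypothetical chromosome for 6 tasks and 3 underwater robots.
        keys = [random.uniform(0, 3) for _ in range(6)]
        print(decode_random_keys(keys, n_robots=3))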

    On the Application of a Single-Objective Hybrid Harmony Search Algorithm to Node Localization in Anchor-based Wireless Sensor Networks

    In many applications based on Wireless Sensor Networks (WSNs) with static sensor nodes, the availability of accurate location information for the network nodes may become essential. The node localization problem consists of estimating all the unknown node positions based on noisy pairwise distance measurements between nodes within range of each other. Maximum Likelihood (ML) estimation results in a non-convex problem, which is further complicated by the fact that sufficient conditions for the solution to be unique are not easily identified, especially when dealing with sparse networks. Thereby, different node configurations can provide equally good fitness results, with only one of them corresponding to the real network geometry. This paper presents a novel soft-computing localization technique based on hybridizing a Harmony Search (HS) algorithm with a local search procedure whose aim is to identify localizability issues and mitigate their effects during the iterative process. Moreover, certain connectivity-based geometrical constraints are exploited to further reduce the areas where each sensor node can be located. Simulation results show that our approach outperforms a previously proposed meta-heuristic localization scheme based on the Simulated Annealing (SA) algorithm, in terms of both localization error and computational cost
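
    The fitness function that such metaheuristic localizers minimise can be sketched as the squared mismatch between candidate inter-node distances and the noisy range measurements, with anchors keeping their known coordinates fixed. The coordinates, pairs and noise level below are synthetic illustrations, not the paper’s simulation setup.

        import numpy as np

        def localization_cost(positions, measured, pairs):
            """Sum of squared errors between candidate inter-node distances and noisy
            pairwise range measurements (the quantity an HS/SA solver would minimise)."""
            err = 0.0
            for (i, j), d_meas in zip(pairs, measured):
                d_est = np.linalg.norm(positions[i] - positions[j])
                err += (d_est - d_meas) ** 2
            return err

        # Synthetic example: 2 anchors (fixed) plus 2 unknown nodes in a 2-D deployment.
        true_pos = np.array([[0.0, 0.0], [10.0, 0.0], [3.0, 4.0], [7.0, 5.0]])
        pairs = [(0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
        rng = np.random.default_rng(1)
        measured = [np.linalg.norm(true_pos[i] - true_pos[j]) + rng.normal(scale=0.1)
                    for i, j in pairs]

        candidate = true_pos.copy()
        candidate[2:] += rng.normal(scale=0.5, size=(2, 2))   # perturb only the unknown nodes
        print(localization_cost(candidate, measured, pairs))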

    A Novel Heuristic Approach for Distance-and Connectivity-based Multihop Node Localization in Wireless Sensor Networks

    The availability of accurate location information for the constituent nodes becomes essential in many applications of wireless sensor networks. In this context, we focus on anchor-based networks, where the positions of a few nodes are assumed to be fixed and known a priori, whereas the locations of all other nodes are to be estimated based on noisy pairwise distance measurements. This localization task embodies a non-convex optimization problem, which becomes even more involved because the network may not be uniquely localizable, especially when its connectivity is not sufficiently high. To efficiently tackle this problem, we present a novel soft computing approach based on hybridizing the Harmony Search (HS) algorithm with a local search procedure that iteratively alleviates the aforementioned non-uniqueness of sparse network deployments. Furthermore, the areas in which sensor nodes can be located are limited by means of connectivity-based geometrical constraints. Extensive simulation results show that the proposed approach outperforms previously published soft computing localization techniques in most of the simulated topologies. In particular, to assess the effectiveness of the technique, we compare its performance, in terms of Normalized Localization Error (NLE), to that of Simulated Annealing (SA)-based and Particle Swarm Optimization (PSO)-based techniques, as well as to a naive implementation of a Genetic Algorithm (GA) incorporating the same local search procedure proposed here. Non-parametric hypothesis tests are also used to shed light on the statistical significance of the obtained results
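
    One simple way to exploit the connectivity-based geometrical constraints mentioned above is sketched here: if a node hears an anchor, it must lie within that anchor’s communication radius, so intersecting the axis-aligned bounding boxes of the communication disks of all heard anchors shrinks the region the search algorithm has to explore. The radius and coordinates are illustrative, not the paper’s exact constraint model.

        # Intersect the bounding boxes of the communication disks of all anchors a node
        # can hear, yielding a reduced search region for that node (illustrative values).

        def feasible_box(heard_anchors, radius):
            """heard_anchors: list of (x, y) positions of anchors the node is connected to."""
            xmin = max(x - radius for x, _ in heard_anchors)
            xmax = min(x + radius for x, _ in heard_anchors)
            ymin = max(y - radius for _, y in heard_anchors)
            ymax = min(y + radius for _, y in heard_anchors)
            if xmin > xmax or ymin > ymax:
                return None   # inconsistent connectivity information
            return (xmin, xmax), (ymin, ymax)

        anchors_heard = [(0.0, 0.0), (8.0, 2.0), (4.0, 7.0)]
        print(feasible_box(anchors_heard, radius=6.0))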

    On the Design of a Novel Two-Objective Harmony Search Approach for Distance- and Connectivity-based Localization in Wireless Sensor Networks

    In several wireless sensor network applications, the availability of accurate node location information is essential to make the collected data meaningful. In this context, estimating the positions of all unknown-located nodes of the network based on noisy distance-related measurements (usually referred to as localization) generally embodies a non-convex optimization problem, which is further exacerbated by the fact that the network may not be uniquely localizable, especially when its connectivity degree is not sufficiently high. In order to efficiently tackle this problem, we propose a novel two-objective localization approach based on the combination of the Harmony Search (HS) algorithm and a local search procedure. Moreover, some connectivity-based geometrical constraints are defined and exploited to limit the areas in which sensor nodes can be located. The proposed method is tested with different network configurations and compared, in terms of normalized localization error and three multi-objective quality indicators, with a state-of-the-art metaheuristic localization scheme based on the Pareto Archived Evolution Strategy (PAES). The results show that the proposed approach achieves considerable accuracy and, in the majority of the scenarios, outperforms PAES
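
    Multi-objective quality indicators such as the hypervolume (not necessarily the three used in the comparison above) can be computed directly for a two-objective minimisation front; the front and reference point below are invented for illustration.

        def hypervolume_2d(front, ref):
            """Hypervolume (area dominated with respect to a reference point) of a
            non-dominated two-objective minimisation front given as (f1, f2) tuples."""
            pts = sorted(front)                  # ascending f1, so f2 is non-increasing
            area, prev_f2 = 0.0, ref[1]
            for f1, f2 in pts:
                area += (ref[0] - f1) * (prev_f2 - f2)
                prev_f2 = f2
            return area

        # Hypothetical non-dominated localization results: (normalized error, second objective)
        front = [(0.10, 0.80), (0.20, 0.50), (0.35, 0.30)]
        print(hypervolume_2d(front, ref=(1.0, 1.0)))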